Gilles Blanchard
Wednesday 4th November 2015
Change in Time: 3.00pm
Ground Floor Seminar Room
25 Howland Street, London, W1T 4JG
Convergence rates of spectral regularization methods for statistical inverse learning problems
Consider an inverse problem of the form g = Af, where A is a known operator between Hilbert function spaces, and assume that we observe g at some randomly drawn points X_1, ..., X_n which are i.i.d. according to a distribution P_X, each observation additionally being subject to independent random noise. The goal is to recover the function f. Here it is assumed that for each point x the evaluation mapping f -> (Af)(x) is continuous.
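Concretely (in notation added here for illustration; the talk's own notation may differ), writing Y_1, ..., Y_n for the noisy observations, the model reads

    Y_i = (Af)(X_i) + eps_i,   i = 1, ..., n,

where the eps_i are independent, centered noise variables. The continuity of the point evaluations means that the image space of A can be taken to be a reproducing kernel Hilbert space, so the data amount to noisy kernel regression observations of g = Af.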
This setting, as well as its relation to random nonparametric regression and to statistical learning with reproducing kernels, has been proposed and studied in a series of works by Caponnetto, De Vito, Rosasco, and Odone (among others).
In this talk we will first review this setting in some detail, as well as the principle of estimation by so-called spectral methods.
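To fix ideas, here is a minimal sketch of a spectral method in the kernelized setting (a simplified illustration added here, not the estimators analyzed in the talk): one diagonalizes the normalized kernel matrix and replaces the ill-posed inversion t -> 1/t of its eigenvalues by a regularized filter g_lambda(t), e.g. the Tikhonov filter g_lambda(t) = 1/(t + lambda) or spectral cutoff.

    import numpy as np

    def spectral_estimator(K, y, lam, filter_fn=None):
        # Spectral regularization over an n x n kernel matrix K.
        # Returns coefficients alpha with fhat(x) = sum_i alpha_i k(x, X_i).
        # Default filter: Tikhonov, g_lambda(t) = 1/(t + lambda), which
        # recovers kernel ridge regression; spectral cutoff would instead
        # use g_lambda(t) = 1/t for t >= lambda and 0 otherwise.
        n = K.shape[0]
        if filter_fn is None:
            filter_fn = lambda t: 1.0 / (t + lam)
        evals, evecs = np.linalg.eigh(K / n)  # eigendecomposition of K/n
        filtered = np.array([filter_fn(max(t, 0.0)) for t in evals])
        # alpha = V g_lambda(D) V^T y / n, where K/n = V D V^T
        return evecs @ (filtered * (evecs.T @ y)) / n

With the Tikhonov filter this reproduces the kernel ridge regression coefficients (K + n*lam*I)^{-1} y; other filters (Landweber iteration, spectral cutoff) fit the same template by changing filter_fn only.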
We will then present results on convergence rates for such methods, extending and completing previously known ones. In particular, we will consider the optimality, from a statistical point of view, of a general class of linear spectral methods.
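For context, results of this kind are typically stated under a Hölder-type source condition (a standard assumption in this literature; the exact hypotheses of the talk may differ):

    f = (A*A)^r h,   with ||h|| <= R, r > 0,

which quantifies the smoothness of the target f relative to the operator A; optimality is then understood in the minimax sense over such smoothness classes.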